

TED学院

Source: Internet | Author: Wang Lin | Updated: 2021-06-24 10:00:42



The listening materials for the Test for English Majors, Band 8 (TEM-8) are drawn mainly from British and American newspapers and magazines, radio stations, and websites. One such source is TED talks: the mini-lecture passages on the 2016 and 2018 TEM-8 listening tests came from TED talks, so we recommend watching and listening to TED talks regularly.

Speaker: Margaret Mitchell

Talk title: How to build AI that works in our favor

English–Chinese parallel translation

I work on helping computers communicate about the world around us.

我致力于协助电脑和我们周围世界的沟通。

There are a lot of ways to do this, and I like to focus on helping computers to talk about what they see and understand.

有很多方法可以做到这一点,我喜欢专注于协助电脑谈论它们看到和理解的内容。

Given a scene like this, a modern computer-vision algorithm can tell you that there's a woman and there's a dog.

鉴于这样的情景,一个现代的计算机视觉演算法,可以告诉你,有一个女人,还有一只狗。

It can tell you that the woman is smiling.

它可以告诉你,那个女人在微笑。

It might even be able to tell you that the dog is incredibly cute.

它甚至可以告诉你,这只狗非常可爱。
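The capabilities just described can be pictured as structured detector output plus a simple verbalizer. Below is a minimal sketch, assuming a purely hypothetical `Detection` record and an invented 0.5 confidence cutoff; it is not any real computer-vision API:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # object class, e.g. "woman" or "dog"
    confidence: float   # model confidence in [0, 1]
    attributes: list    # predicted attributes, e.g. ["smiling"]

def describe(detections):
    """Turn raw detections into the plain-language statements
    the talk describes ("there's a woman", "the dog is cute")."""
    lines = []
    for d in detections:
        if d.confidence < 0.5:  # skip low-confidence guesses
            continue
        desc = f"There is a {d.label}"
        if d.attributes:
            desc += " (" + ", ".join(d.attributes) + ")"
        lines.append(desc)
    return lines

scene = [
    Detection("woman", 0.97, ["smiling"]),
    Detection("dog", 0.94, ["incredibly cute"]),
    Detection("surfboard", 0.31, []),  # too uncertain to report
]
print(describe(scene))
```

The point of the sketch is only that the system reports what it is confident about and stays silent otherwise, which is the behavior the talk attributes to modern vision algorithms.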

I work on this problem thinking about how humans understand and process the world.

我在研究这个问题时,会思考人类如何理解和处理这个世界。

The thoughts, memories and stories that a scene like this might evoke for humans.

思考这样的场景可能在人类心中唤起的那些思想、记忆和故事。

All the interconnections of related situations.

以及所有相关情境之间的相互联系。

Maybe you've seen a dog like this one before, or you've spent time running on a beach like this one, and that further evokes thoughts and memories of a past vacation, past times to the beach, times spent running around with other dogs.

也许你以前见过这样的狗,或者你曾在这样的沙滩上跑步,而这会进一步唤起对过去假期的回忆和联想:以前去海滩的时光,以及和其他狗儿跑来跑去的时光。

One of my guiding principles is that by helping computers to understand what it's like to have these experiences, to understand what we share and believe and feel, then we're in a great position to start evolving computer technology in a way that's complementary with our own experiences.

我的指导原则之一是:通过帮助电脑了解拥有这些经历是什么感觉,了解我们共有的、相信的和感受到的东西,我们就能开始以一种与我们自身经验互补的方式发展计算机技术。

So, digging more deeply into this, a few years ago I began working on helping computers to generate human-like stories from sequences of images.

因此,为了更深入地探究这一点,几年前我开始致力于帮助电脑从图像序列中生成类似人类的故事。

So, one day, I was working with my computer to ask it what it thought about a trip to Australia.

所以,有一天,我正在用电脑工作时,询问它对澳大利亚之行的看法。

It took a look at the pictures, and it saw a koala.

它看了看图片,看到一只树袋熊。

It didn't know what the koala was, but it said it thought it was an interesting-looking creature.

它不知道树袋熊是什么,但电脑表示它认为树袋熊看起来是很有趣的生物。

Then I shared with it a sequence of images about a house burning down.

然后我与电脑分享一系列关于房屋烧毁的图像。

It took a look at the images and it said, "This is an amazing view! This is spectacular!"

电脑看了一下图片说:“这是个惊人的景观!这很壮观!”

It sent chills down my spine.

它使我的脊背发冷。

It saw a horrible, life-changing and life-destroying event and thought it was something positive.

电脑看到一个可怕的、改变生活和毁灭生命的事件,并认为这是积极的事情。

I realized that it recognized the contrast, the reds, the yellows, and thought it was something worth remarking on positively.

我意识到电脑认识到红色和黄色的对比,并认为这是值得积极评价的事情。

And part of why it was doing this was because most of the images I had given it were positive images.

它这么做的部分原因在于,我给它的大部分图像都是积极的图像。

That's because people tend to share positive images when they talk about their experiences.

那是因为人们谈论自己的经历时,倾向于分享积极的图像。

When was the last time you saw a selfie at a funeral?

你上次在葬礼上看到自拍照是什么时候?

I realized that, as I worked on improving AI task by task, dataset by dataset, that I was creating massive gaps, holes and blind spots in what it could understand.

我了解到,当我努力在改善人工智能,一个任务一个任务、一个数据集一个数据集地改善, 结果我却在它能了解什么上创造出了大量的隔阂、漏洞以及盲点。

And while doing so, I was encoding all kinds of biases.

而在这么做的同时,我也把各种偏见编码了进去。

Biases that reflect a limited viewpoint, limited to a single dataset biases that can reflect human biases found in the data, such as prejudice and stereotyping.

这些偏见反映出受限于单一数据集的局限观点;这些偏见也能反映出数据中存在的人类偏见,比如歧视和刻板印象。

I thought back to the evolution of the technology that brought me to where I was that day how the first color images were calibrated against a white woman's skin, meaning that color photography was biased against black faces.

我回头去想一路带我走到那个时点的科技演化, 第一批彩色影像如何根据一个白种女子的皮肤来做校准, 这表示,彩色照片对于黑皮肤脸孔是有偏见的。

And that same bias, that same blind spot continued well into the '90s.

同样的偏见、同样的盲点,一直持续到20世纪90年代。

And the same blind spot continues even today in how well we can recognize different people's faces in facial recognition technology.

而同样的盲点甚至延续至今,体现在人脸识别技术对不同人脸部的识别效果差异上。
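One way to surface the blind spot described here is to audit recognition accuracy per demographic group rather than in aggregate. A minimal sketch with invented, purely illustrative numbers (no real system or dataset):

```python
from collections import defaultdict

# Hypothetical audit records: (demographic group, whether the
# face was recognized correctly). Numbers are illustrative only.
results = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", True),
]

def accuracy_by_group(records):
    """Break accuracy down by group; an aggregate number would
    hide the disparity entirely."""
    totals, correct = defaultdict(int), defaultdict(int)
    for group, ok in records:
        totals[group] += 1
        correct[group] += ok
    return {g: correct[g] / totals[g] for g in totals}

print(accuracy_by_group(results))
```

Here the aggregate accuracy is 75%, which looks respectable, while one group sees only 50%: exactly the kind of gap the talk warns about.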

I thought about the state of the art in research today, where we tend to limit our thinking to one dataset and one problem.

我思考了现今研究的发展水平:我们倾向于把思路限制在一个数据集和一个问题上。

And that in doing so, we were creating more blind spots and biases that the AI could further amplify.

这么做时,我们就会创造出更多盲点和偏见, 它们可能会被人工智能给放大。

I realized then that we had to think deeply about how the technology we work on today looks in five years, in 10 years.

那时我了解到,我们必须要深入思考我们现今努力发展的科技,在五年、十年后会是什么样子。

Humans evolve slowly, with time to correct for issues in the interaction of humans and their environment.

人类演进很慢,有时间去修正人类与其环境互动中出现的问题。

In contrast, artificial intelligence is evolving at an incredibly fast rate.

相对的,人工智能的演进速度非常快。

And that means that it really matters that we think about this carefully right now that we reflect on our own blind spots, our own biases, and think about how that's informing the technology we're creating and discuss what the technology of today will mean for tomorrow.

那就意味着,我们现在就仔细思考这件事非常重要:反省我们自己的盲点和偏见,思考它们如何影响我们正在创造的科技,并讨论现今的科技对未来意味着什么。

CEOs and scientists have weighed in on what they think the artificial intelligence technology of the future will be.

对于未来的人工智能技术会是什么样子,首席执行官们和科学家们已经纷纷发表了自己的看法。

Stephen Hawking warns that "Artificial intelligence could end mankind."

史蒂芬·霍金警告过:“人工智能可能终结人类。”

Elon Musk warns that it's an existential risk and one of the greatest risks that we face as a civilization.

伊隆·马斯克警告过,它是个生存风险,也是我们人类文明所面临最大的风险之一。

Bill Gates has made the point, "I don't understand why people aren't more concerned."

比尔·盖茨则指出:“我不明白为什么人们没有更担心一些。”

But these views -- they're part of the story.

但这些看法--它们是故事的一部分。

The math, the models, the basic building blocks of artificial intelligence are something that we can all access and work with.

数学、模型、人工智能的基础材料,是我们所有人都能够取得并使用的。

We have open-source tools for machine learning and intelligence that we can contribute to.

我们有机器学习和智慧用的开放原始码工具,我们都能对其做出贡献。

And beyond that, we can share our experience.

在那之外,我们可以分享我们的经验。

We can share our experiences with technology and how it concerns us and how it excites us.

分享关于科技、它如何影响我们、它如何让我们兴奋的经验。

We can discuss what we love.

我们可以讨论我们所爱的。

We can communicate with foresight about the aspects of technology that could be more beneficial or could be more problematic over time.

我们能带着远见来交流,谈谈关于科技有哪些面向,随着时间发展可能可以更有帮助,或可能产生问题。

If we all focus on opening up the discussion on AI with foresight towards the future, this will help create a general conversation and awareness about what AI is now, what it can become and all the things that we need to do in order to enable that outcome that best suits us.

若我们都能带着对未来的远见,开放地讨论人工智能,就有助于形成一种普遍的对话和意识:关于人工智能现在是什么样子、它可以变成什么样子,以及为了促成最适合我们的结果,我们需要做的所有事情。

We already see and know this in the technology that we use today.

我们已经在现今我们所使用的科技中看见这一点了。

We use smart phones and digital assistants and Roombas.

我们使用智能手机、数字助理以及扫地机器人。

Are they evil? Maybe sometimes. Are they beneficial? Yes, they're that, too.

它们邪恶吗?也许有时候。它们有帮助吗?是的,这也是事实。

And they're not all the same. And there you already see a light shining on what the future holds.

并且它们并非全都一样的。你们已经看到未来可能性的一丝光芒。

The future continues on from what we build and create right now.

未来延续的基础,是我们现在所建立和创造的。

We set into motion that domino effect that carves out AI's evolutionary path.

我们开始了骨牌效应,刻划出了人工智能的演进路径。

In our time right now, we shape the AI of tomorrow.

我们在现在这时代形塑未来的人工智能。

Technology that immerses us in augmented realities bringing to life past worlds.

让我们能沉浸入扩增实境中的科技,让过去的世界又活了过来。

Technology that helps people to share their experiences when they have difficulty communicating.

协助人们在沟通困难时还能分享经验的科技。

Technology built on understanding the streaming visual worlds used as technology for self-driving cars.

立基在了解串流视觉世界之上的科技,被用来当作自动驾驶汽车的科技。

Technology built on understanding images and generating language, evolving into technology that helps people who are visually impaired be better able to access the visual world.

立基在了解图像和产生语言的科技, 演进成协助视觉损伤者的科技,让他们更能进入视觉的世界。

And we also see how technology can lead to problems.

我们也看到了科技如何导致问题。

We have technology today that analyzes physical characteristics we're born with such as the color of our skin or the look of our face -- in order to determine whether or not we might be criminals or terrorists.

现今,我们有科技能够分析我们天生的身体特征, 比如肤色或面部的外观,可以用来判断我们是否有可能是罪犯或恐怖份子。

We have technology that crunches through our data, even data relating to our gender or our race, in order to determine whether or not we might get a loan.

我们有科技能够分析我们的数据,甚至和我们的性别或种族相关的资料,来决定我们的贷款是否能被核准。

All that we see now is a snapshot in the evolution of artificial intelligence.

我们现在所看见的一切,都是人工智能演进的约略写照。

Because where we are right now, is within a moment of that evolution.

因为我们现在所处的位置,是在那演进的一个时刻当中。

That means that what we do now will affect what happens down the line and in the future.

那就意味着,我们现在所做的,会影响到后续未来发生的事。

If we want AI to evolve in a way that helps humans, then we need to define the goals and strategies that enable that path now.

如果我们想让人工智能的演进方式是对人类有帮助的, 那么我们现在就得要定义目标和策略,来让那条路成为可能。

What I'd like to see is something that fits well with humans, with our culture and with the environment.

我想看见的,是能与人类、我们的文化以及我们的环境良好契合的科技。

Technology that aids and assists those of us with neurological conditions or other disabilities in order to make life equally challenging for everyone.

这种科技要能够帮助和协助有神经系统疾病或其他残疾者的人, 让人生对于每个人的挑战程度是平等的。

Technology that works regardless of your demographics or the color of your skin.

无论你的人口统计特征或肤色如何,这种科技都能同样地运作。

And so today, what I focus on is the technology for tomorrow and for 10 years from now.

所以,现今我着重的是明日的科技和十年后的科技。

AI can turn out in many different ways.

人工智能可能朝许多不同的方向发展。

But in this case, it isn't a self-driving car without any destination.

但在这个情况中,它并不是没有目的地的自动驾驶汽车。

This is the car that we are driving. We choose when to speed up and when to slow down.

它是我们在开的汽车。我们选择何时要加速何时要减速。

We choose if we need to make a turn. We choose what the AI of the future will be.

我们选择是否要转弯。我们选择将来的人工智能会是哪一种。

There's a vast playing field of all the things that artificial intelligence can become. It will become many things.

人工智能能够变成各式各样的东西。它会变成许多东西。

And it's up to us now, in order to figure out what we need to put in place to make sure the outcomes of artificial intelligence are the ones that will be better for all of us. Thank you.

现在,决定权在我们,我们要想清楚我们得要准备什么,来确保人工智能的结果会是对所有人都更好的结果。谢谢!

Remark: All rights belong to TED. For more TED-related information, visit the official website at www.ted.com.




Article URL: http://ai.rw2015.com/szyw/9633.html

